Online gradient descent learning algorithm

Authors

  • Yiming Ying
  • Massimiliano Pontil
Abstract

This paper considers the least-squares online gradient descent algorithm in a reproducing kernel Hilbert space (RKHS) without an explicit regularization term. We present a novel capacity-independent approach to deriving error bounds and convergence results for this algorithm. The essential element in our analysis is the interplay between the generalization error and a weighted cumulative error, which we define in the paper. We show that, although the algorithm does not involve an explicit RKHS regularization term, choosing the step sizes appropriately yields error rates competitive with those in the literature.

Short title: Online gradient descent learning
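The unregularized update the abstract describes can be sketched as follows. The Gaussian kernel, the decaying step-size schedule, and the function names below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def kernel_ogd(X, y, kernel, eta):
    """Least-squares online gradient descent in an RKHS, no regularizer.

    After round t the hypothesis is f_t(x) = sum_j alpha[j] * kernel(X[j], x).
    """
    alpha = np.zeros(len(X))
    for t in range(len(X)):
        # evaluate the current hypothesis at the new point x_t
        f_xt = sum(alpha[j] * kernel(X[j], X[t]) for j in range(t))
        # gradient step on the pointwise loss (f(x_t) - y_t)^2 / 2:
        # f_{t+1} = f_t - eta_t * (f_t(x_t) - y_t) * kernel(x_t, .)
        alpha[t] = -eta(t) * (f_xt - y[t])
    return alpha
```

For example, with a Gaussian kernel `lambda a, b: np.exp(-(a - b) ** 2)` and a decaying schedule `eta = lambda t: 0.5 / (t + 1) ** 0.5`, the returned coefficients define the final predictor `x -> sum_j alpha[j] * kernel(X[j], x)`; the step-size decay plays the role that explicit regularization would otherwise play.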


Similar resources

Designing stable neural identifier based on Lyapunov method

The stability of the learning rate in neural network identifiers and controllers is one of the challenging issues that attracts great interest from researchers of neural networks. This paper suggests an adaptive gradient descent algorithm with stable learning laws for a modified dynamic neural network (MDNN) and studies the stability of this algorithm. Also, a stable learning algorithm for parameters of ...


Less Regret via Online Conditioning

We analyze and evaluate an online gradient descent algorithm with adaptive per-coordinate adjustment of learning rates. Our algorithm can be thought of as an online version of batch gradient descent with a diagonal preconditioner. This approach leads to regret bounds that are stronger than those of standard online gradient descent for general online convex optimization problems. Experimentally,...
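The per-coordinate idea in this abstract can be illustrated with an AdaGrad-style sketch, where each coordinate's step size is scaled by the inverse root of its accumulated squared gradients (a diagonal preconditioner). The function name, base step size, and epsilon below are assumptions for illustration, not necessarily the paper's exact update:

```python
import numpy as np

def adagrad_ogd(grad_fns, w0, eta=0.5, eps=1e-8):
    """Online gradient descent with adaptive per-coordinate learning rates."""
    w = np.array(w0, dtype=float)
    G = np.zeros_like(w)  # running sum of squared gradients, per coordinate
    for grad in grad_fns:  # one convex loss gradient oracle per round
        g = grad(w)
        G += g * g
        # diagonal preconditioner: coordinates with large accumulated
        # gradients take smaller steps, and vice versa
        w -= eta * g / (np.sqrt(G) + eps)
    return w
```

Compared with a single global learning rate, this lets rarely updated coordinates keep larger steps while frequently updated ones are damped, which is the source of the stronger regret bounds the abstract mentions.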


Forecasting GDP Growth Using ANN Model with Genetic Algorithm

Applying nonlinear models to the estimation and forecasting of economic models is now becoming more common, thanks to advances in computing technology. Artificial Neural Network (ANN) models, which are nonlinear local-optimizer models, have proven successful in forecasting economic variables. Most ANN models applied in economics use the gradient descent method as their learning algorithm. However, t...


Learning rotations with little regret

We describe online algorithms for learning a rotation from pairs of unit vectors in R^n. We show that the expected regret of our online algorithm compared to the best fixed rotation chosen offline over T iterations is O(√(nT)). We also give a lower bound that proves that this expected regret bound is optimal within a constant factor. This resolves an open problem posed in COLT 2008. Our online a...


Probabilistic Multileave Gradient Descent

Online learning to rank methods aim to optimize ranking models based on user interactions. The dueling bandit gradient descent (DBGD) algorithm is able to effectively optimize linear ranking models solely from user interactions. We propose an extension of DBGD, called probabilistic multileave gradient descent (PMGD) that builds on probabilistic multileave, a recently proposed highly sensitive a...


Accelerating Stochastic Gradient Descent via Online Learning to Sample

Stochastic Gradient Descent (SGD) is one of the most widely used techniques for online optimization in machine learning. In this work, we accelerate SGD by adaptively learning how to sample the most useful training examples at each time step. First, we show that SGD can be used to learn the best possible sampling distribution of an importance sampling estimator. Second, we show that the samplin...
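The importance-sampling estimator this abstract builds on can be sketched as follows. Note the paper *learns* the sampling distribution online; as a simplified illustration, this sketch uses a fixed distribution proportional to squared example norms (a hypothetical choice) and shows only the unbiased reweighting step:

```python
import numpy as np

def importance_sgd(X, y, steps=1000, eta=0.01, seed=0):
    """SGD for least squares with a non-uniform sampling distribution.

    Example i is drawn with probability p[i] and its gradient is
    reweighted by 1 / (n * p[i]), which keeps the estimator unbiased.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # fixed distribution proportional to squared row norms (illustrative;
    # the paper instead adapts this distribution at each time step)
    p = np.linalg.norm(X, axis=1) ** 2 + 1e-6
    p /= p.sum()
    w = np.zeros(d)
    for _ in range(steps):
        i = rng.choice(n, p=p)
        g = 2.0 * (X[i] @ w - y[i]) * X[i]  # gradient of (x_i.w - y_i)^2
        w -= eta * g / (n * p[i])  # importance-weight correction
    return w
```

The reweighting makes the expected update equal to the full-batch gradient step regardless of the sampling distribution, which is what lets the distribution itself be optimized for variance reduction.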




Publication date: 2007